What's new in Seastar - issue 2
I/O #
Unify CPU scheduling groups and IO priority classes #1607 is probably the largest change described in this post. It deprecates the use of I/O priority classes in Seastar APIs; instead, the current scheduling group is used to drive the I/O priority internally. API level 7 removes I/O priority from the public APIs entirely.
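In practice this means I/O now inherits the priority of whatever scheduling group it runs under. A minimal sketch of what that looks like, assuming the standard create_scheduling_group/with_scheduling_group APIs (do_io() and the header paths are placeholders/assumptions on my part):

```cpp
#include <seastar/core/scheduling.hh>
#include <seastar/core/with_scheduling_group.hh>

// Sketch: work (including any I/O it issues) runs at the priority of the
// scheduling group it executes under; no separate io_priority_class needed.
seastar::future<> run_background_io() {
    return seastar::create_scheduling_group("background", 100).then(
        [] (seastar::scheduling_group sg) {
            return seastar::with_scheduling_group(sg, [] {
                return do_io(); // hypothetical I/O-issuing function
            });
        });
}
```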
Networking #
TLS: support for extracting certificate subject alt names from client certs #1629 allows Seastar applications to extract additional metadata, in this case subject alternative names, from TLS connections. Together with fields like the distinguished name, this is often useful when implementing authentication or authorization policies.
This is the new API. By default all the names are returned, but the result can also be filtered by type.
future<std::vector<subject_alt_name>> get_alt_name_information(
    connected_socket& socket, std::unordered_set<subject_alt_name_type> types = {});
where the supported types are represented by this enumeration:
enum class subject_alt_name_type {
    dnsname = 1,
    rfc822name,
    uri,
    ipaddress,
    othername,
    dn,
};
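A hedged usage sketch, assuming the function lives in the tls namespace and that we only care about DNS names (the function and logger names here are placeholders):

```cpp
#include <seastar/net/tls.hh>

// Sketch: request only the DNS-type subject alternative names from the
// peer certificate of an established TLS connection.
seastar::future<> log_dns_sans(seastar::connected_socket& socket) {
    return seastar::tls::get_alt_name_information(
        socket, { seastar::tls::subject_alt_name_type::dnsname })
        .then([] (std::vector<seastar::tls::subject_alt_name> names) {
            applog.info("peer presented {} DNS SANs", names.size());
        });
}
```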
Seastar string improvements #
sstring: add == comparison operators #1654 enables more comparison cases (e.g. char* vs seastar::sstring) that weren't auto-generated before because of a nuance involving the existing operator!= (lots of good details in the PR cover letter).
Provide kernel symbols to seastar-addr2line #
The seastar-addr2line script is of course the tool we all use to decode backtraces (e.g. reactor stalls) produced by Seastar applications. Traces for reactor stalls, in particular, may include symbols from the kernel; these are decoded using the output of /proc/kallsyms, which maps addresses to symbols for the currently running kernel. However, it is very common for traces to be decoded on a machine other than the one where the trace was created, which in most cases means a different kernel is running.
scripts: addr2line: allow specifying kallsyms path #1644 addresses this problem by allowing a snapshot of /proc/kallsyms to be taken and provided to the seastar-addr2line tool, which will use it instead of reading /proc/kallsyms directly. Use it like this:
$> seastar-addr2line --kallsyms /path/to/kallsyms-snapshot
The default value for --kallsyms continues to be /proc/kallsyms.
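An end-to-end sketch of the workflow (the -e flag and file names are illustrative; note that kptr_restrict may zero out addresses for non-root readers):

```shell
# On the machine that produced the trace, snapshot the running kernel's
# symbol table (run as root to avoid zeroed addresses under kptr_restrict):
sudo cat /proc/kallsyms > kallsyms-snapshot

# Later, on the decoding machine, point the script at the snapshot:
seastar-addr2line -e ./my-seastar-app --kallsyms ./kallsyms-snapshot < trace.txt
```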
Seastar RPC system #
- rpc: add APIs for connection dropping #1663 exposes connection information to handlers, allowing connections to be dropped via rpc::server::abort_connection.
- Split rpc::server stop into two parts #1671 splits shutdown of the RPC server into two phases. The first phase (shutdown) stops new requests from being accepted and drops any pending replies. The second phase (stop) additionally waits for all pending handlers to finish. The former is useful for quickly isolating a server from the network, while the latter is typically what you would use to wait for references to be dropped before releasing the server's resources.
It appears both of these Seastar features are being used by Scylla in https://github.com/scylladb/scylladb/pull/13850 to prevent nodes that have been kicked out of the cluster from being able to communicate with the remaining nodes.
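A sketch of how the two phases might be combined in a decommissioning path (this assumes shutdown() returns a future like stop() does; the surrounding function is hypothetical):

```cpp
#include <seastar/rpc/rpc.hh>

// Sketch: cut the server off from the network immediately, then wait for
// in-flight handlers to drain before releasing its resources.
seastar::future<> decommission(seastar::rpc::server& server) {
    return server.shutdown().then([&server] {
        // ... update cluster state while the server is isolated ...
        return server.stop();
    });
}
```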
HTTP client improvements #
- Improve http::reply re-allocations and copying in client #1657 contains clean-ups and memory allocation optimizations for the Seastar HTTP client.
- Limit the number of http client opened connections #1652 adds the ability to control the number of connections (active or idle) that the Seastar HTTP client has open concurrently. Prior to this PR, the client would open as many connections as it needed to complete a request.
- http/client: Introduce unexpected_status_error for client requests #1670 changes http::client::make_request() so that it returns an exceptional future containing unexpected_status_error when the status of the reply is not the expected status.
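A hedged sketch of handling that error (the exact make_request() overload, the header paths, and the exception's namespace are assumptions on my part):

```cpp
#include <seastar/http/client.hh>
#include <seastar/http/request.hh>

// Sketch: make_request() resolves exceptionally when the reply status is
// not the expected one; handle_exception_type lets us react to just that.
seastar::future<> check_health(seastar::http::experimental::client& cli) {
    auto req = seastar::http::request::make("GET", "localhost", "/health");
    return cli.make_request(std::move(req),
        [] (const seastar::http::reply&, seastar::input_stream<char>&&) {
            return seastar::make_ready_future<>();
        },
        seastar::http::reply::status_type::ok)
        .handle_exception_type([] (const seastar::httpd::unexpected_status_error& e) {
            applog.warn("health check failed: {}", e.what());
        });
}
```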
Support for C++ modules #
Pull request treewide: add C++ modules support #1605 adds support for compiling Seastar as a C++ module using CMake. There is an example in tree, and as expected, it’s fire.
Here’s the code for the example:
import seastar;

using namespace seastar;

logger applog("app");

int main(int argc, char** argv) {
    seastar::app_template app;
    app.run(argc, argv, [] () -> future<> {
        applog.info("Hello world!");
        return make_ready_future<>();
    });
}
And this is what it looks like building with CMake:
add_executable (hello_cxx_module)
target_sources (hello_cxx_module
PRIVATE
hello-cxx-module.cc)
target_link_libraries (hello_cxx_module
PRIVATE
seastar-module)
To be honest, I have mostly been a curious onlooker of C++ modules. While I think I can articulate the basic advantages of the feature, I don't have much hands-on time with modules.
There is a new CMake build option, Seastar_MODULE, which turns on the feature. Seastar will likely continue to support traditional non-module builds for quite some time, as compiler and build system support is still very new and not widely available.
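Enabling it might look something like this at configure time (the flags other than Seastar_MODULE are illustrative; a very recent compiler/CMake/Ninja stack with module support is required):

```shell
# Configure and build Seastar with the C++ modules feature turned on.
cmake -G Ninja -DSeastar_MODULE=ON -B build -S .
ninja -C build
```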
Performance tests support markdown output #
Pull request tests: perf: add a markdown printer #1684 adds support for producing performance test results in markdown table format. Performance results can now conveniently be copy-pasted into places like blogs and GitHub issues.
Before the results looked like this:
$> ./tests/perf/coroutine_perf
test iterations median mad min max allocs tasks inst
coroutine_test.empty 155424209 6.450ns 0.003ns 6.446ns 6.464ns 1.000 0.000 161.1
coroutine_test.without_preemption_check 124847404 8.010ns 0.005ns 8.005ns 8.034ns 1.000 0.000 209.1
coroutine_test.ready 120188885 8.316ns 0.001ns 8.316ns 8.322ns 1.000 0.000 217.1
coroutine_test.maybe_yield 128703519 7.771ns 0.002ns 7.769ns 7.802ns 1.000 0.000 200.1
And now when markdown output is requested the output is a markdown table:
$> ./tests/perf/coroutine_perf --md-output - --no-stdout
| test | iterations | median | mad | min | max | allocs | tasks | inst |
| - | - | - | - | - | - | - | - | - |
| coroutine_test.empty | 116194476 | 8.612ns | 0.005ns | 8.602ns | 8.618ns | 1.000 | 0.000 | 161.1 |
| coroutine_test.without_preemption_check | 124883233 | 7.974ns | 0.001ns | 7.973ns | 7.999ns | 1.000 | 0.000 | 209.1 |
| coroutine_test.ready | 119721112 | 8.324ns | 0.004ns | 8.318ns | 8.329ns | 1.000 | 0.000 | 217.1 |
| coroutine_test.maybe_yield | 128265657 | 7.760ns | 0.000ns | 7.759ns | 7.761ns | 1.000 | 0.000 | 200.1 |
Which will render nicely like this:

| test | iterations | median | mad | min | max | allocs | tasks | inst |
|---|---|---|---|---|---|---|---|---|
| coroutine_test.empty | 116194476 | 8.612ns | 0.005ns | 8.602ns | 8.618ns | 1.000 | 0.000 | 161.1 |
| coroutine_test.without_preemption_check | 124883233 | 7.974ns | 0.001ns | 7.973ns | 7.999ns | 1.000 | 0.000 | 209.1 |
| coroutine_test.ready | 119721112 | 8.324ns | 0.004ns | 8.318ns | 8.329ns | 1.000 | 0.000 | 217.1 |
| coroutine_test.maybe_yield | 128265657 | 7.760ns | 0.000ns | 7.759ns | 7.761ns | 1.000 | 0.000 | 200.1 |
Improved formatting of seastar-json2code.py generated code #
Pull request seastar-json2code: generate better-formatted code #1538 improves the formatting of the code generated by the seastar-json2code.py script.
Using an example in the Seastar source tree, here is the formatting of the code generated before the change:
/**
* Demonstrate an enum returned, note this is not the same enum type of the request
*/
enum class my_object_enum_var { VAL1, VAL2, VAL3, NUM_ITEMS};
struct enum_var_wrapper : public json::jsonable {
enum_var_wrapper() = default;
virtual std::string to_json() const {
switch(v) {
case my_object_enum_var::VAL1: return "\"VAL1\"";
case my_object_enum_var::VAL2: return "\"VAL2\"";
case my_object_enum_var::VAL3: return "\"VAL3\"";
default: return "\"Unknown\"";
}
}
template<class T>
enum_var_wrapper (const T& _v) {
switch(_v) {
case T::VAL1: v = my_object_enum_var::VAL1; break;
case T::VAL2: v = my_object_enum_var::VAL2; break;
case T::VAL3: v = my_object_enum_var::VAL3; break;
default: v = my_object_enum_var::NUM_ITEMS;
}
}
template<class T>
operator T() const {
switch(v) {
case my_object_enum_var::VAL1: return T::VAL1;
case my_object_enum_var::VAL2: return T::VAL2;
case my_object_enum_var::VAL3: return T::VAL3;
default: return T::VAL1;
}
}
typedef typename std::underlying_type<my_object_enum_var>::type pos_type;
enum_var_wrapper& operator++() {
v = static_cast<my_object_enum_var>(static_cast<pos_type>(v) + 1);
return *this;
}
And here it is with the improved formatting:
/**
* Demonstrate an enum returned, note this is not the same enum type of the request
*/
enum class my_object_enum_var { VAL1, VAL2, VAL3, NUM_ITEMS };
struct enum_var_wrapper : public json::jsonable {
enum_var_wrapper() = default;
virtual std::string to_json() const {
switch(v) {
case my_object_enum_var::VAL1: return "\"VAL1\"";
case my_object_enum_var::VAL2: return "\"VAL2\"";
case my_object_enum_var::VAL3: return "\"VAL3\"";
default: return "\"Unknown\"";
}
}
template<class T>
enum_var_wrapper(const T& _v) {
switch(_v) {
case T::VAL1: v = my_object_enum_var::VAL1; break;
case T::VAL2: v = my_object_enum_var::VAL2; break;
case T::VAL3: v = my_object_enum_var::VAL3; break;
default: v = my_object_enum_var::NUM_ITEMS;
}
}
template<class T>
operator T() const {
switch(v) {
case my_object_enum_var::VAL1: return T::VAL1;
case my_object_enum_var::VAL2: return T::VAL2;
case my_object_enum_var::VAL3: return T::VAL3;
default: return T::VAL1;
}
}
typedef typename std::underlying_type<my_object_enum_var>::type pos_type;
enum_var_wrapper& operator++() {
v = static_cast<my_object_enum_var>(static_cast<pos_type>(v) + 1);
return *this;
}
Way better!